Remove arguments of _forward_eval_ϵ #2736

Merged
merged 3 commits into from
Apr 28, 2025

Conversation

blegat (Member) commented Apr 25, 2025

They are always the same anyway. I wonder if there is any performance benefit to saving these interpret calls. Is there a standard benchmark used for checking for regressions?
Maybe https://github.com/lanl-ansi/rosetta-opf/blob/main/jump.jl ?

odow (Member) commented Apr 25, 2025

Or build that model and then manually evaluate the Hessian with MOI.eval_hessian_lagrangian etc.

odow (Member) commented Apr 25, 2025

When I refactored this out of JuMP, I tried to make the fewest possible changes. The code can be cleaned up a lot.

blegat (Member, author) commented Apr 25, 2025

Or build that model and then manually evaluate the Hessian with MOI.eval_hessian_lagrangian etc.

Yes, we could add a reduced version to MOI.Benchmarks
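A reduced benchmark along those lines could look roughly like the following sketch. It assumes the `MOI.Benchmarks.suite` / `create_baseline` / `compare_against_baseline` API; the exact keyword arguments should be checked against the current `MOI.Benchmarks` documentation:

```julia
import Ipopt
import MathOptInterface as MOI

# Build the benchmark suite from an optimizer factory.
suite = MOI.Benchmarks.suite() do
    MOI.instantiate(Ipopt.Optimizer)
end

# On master: record a baseline to compare against.
MOI.Benchmarks.create_baseline(suite, "master"; directory = "/tmp")

# On the feature branch: rerun and report regressions
# relative to the recorded baseline.
MOI.Benchmarks.compare_against_baseline(suite, "master"; directory = "/tmp")
```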

odow (Member) commented Apr 28, 2025

Using the following benchmark script:

using Revise
import Ipopt
import JuMP
import MathOptInterface as MOI
import PGLib
import PowerModels

model = JuMP.direct_model(Ipopt.Optimizer())
pm = PowerModels.instantiate_model(
    PGLib.pglib("pglib_opf_case10000_goc"),
    PowerModels.ACPPowerModel,
    PowerModels.build_opf;
    jump_model = model,
);

ipopt = JuMP.backend(model)
x = MOI.get(ipopt, MOI.ListOfVariableIndices())
# Number of constraints and variables.
m, n = length(ipopt.nlp_model.constraints), length(x)

evaluator = MOI.Nonlinear.Evaluator(
    ipopt.nlp_model,
    MOI.Nonlinear.SparseReverseMode(),
    x,
)
MOI.initialize(evaluator, [:Grad, :Jac, :Hess])

# Evaluate the Hessian of the Lagrangian at a random point.
H_struct = MOI.hessian_lagrangian_structure(evaluator)
H = zeros(length(H_struct))
mu = rand(m)
sigma = 0.0
x_v = rand(n)
@time MOI.eval_hessian_lagrangian(evaluator, H, x_v, sigma, mu)

Before

julia> x_v = rand(n);

julia> @time MOI.eval_hessian_lagrangian(evaluator, H, x_v, sigma, mu)
  6.617456 seconds (158.26 k allocations: 11.273 MiB)

julia> x_v = rand(n);

julia> @time MOI.eval_hessian_lagrangian(evaluator, H, x_v, sigma, mu)
  6.661574 seconds (158.26 k allocations: 11.273 MiB)

After

julia> x_v = rand(n);

julia> @time MOI.eval_hessian_lagrangian(evaluator, H, x_v, sigma, mu)
  6.644427 seconds (158.26 k allocations: 11.273 MiB)

julia> x_v = rand(n);

julia> @time MOI.eval_hessian_lagrangian(evaluator, H, x_v, sigma, mu)
  6.850976 seconds (158.26 k allocations: 11.273 MiB)

Seems okay to me: no measurable difference.
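For timings less sensitive to noise than a bare `@time`, a BenchmarkTools-based measurement could be used. This is a sketch that reuses the `evaluator`, `H`, `n`, `sigma`, and `mu` bindings from the script above:

```julia
using BenchmarkTools

x_v = rand(n)
# Interpolate globals with `$` so the benchmark measures
# only the eval_hessian_lagrangian call itself.
@btime MOI.eval_hessian_lagrangian($evaluator, $H, $x_v, $sigma, $mu)
```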

@odow odow merged commit bd0bf71 into master Apr 28, 2025
30 of 31 checks passed
@odow odow deleted the bl/simplify_forward branch April 28, 2025 01:27